China’s AI Firms Go Offshore: Training in SEA to Tap Nvidia Power
Major Chinese tech firms are quietly relocating their most advanced artificial-intelligence (AI) model training operations to data centres across Southeast Asia — a strategic move aimed at preserving access to high-end chips from Nvidia despite U.S. export restrictions. (Reuters)
✈️ Moving AI “Heavy Lifting” Overseas
According to a report in the Financial Times, firms such as Alibaba and ByteDance have shifted large language model (LLM) training workloads to foreign-owned data centres, primarily in Southeast Asia. (Reuters)
The shift follows U.S. export controls imposed in April 2025 on Nvidia’s H20 chips — hardware widely used for AI training. By leasing capacity in overseas data centres operated by non-Chinese entities, these firms sidestep the restrictions without directly importing the restricted chips. (Reuters)
Notably, the Chinese AI company DeepSeek stands apart: it had pre-emptively stockpiled Nvidia chips before the ban and continues domestic training. Additionally, DeepSeek is working with domestic chipmakers — notably Huawei — to develop China’s next-gen AI hardware. (Reuters)
Why This Matters
- A workaround to tech controls: This offshore strategy highlights how global supply chains and data-centre infrastructure enable companies to skirt hardware embargoes — raising questions about the efficacy of export restrictions in a globalized digital economy. (Technology.org)
- Implications for Nvidia and chip geopolitics: China’s tech firms’ continued access to cutting-edge chips ensures they can remain competitive in the global AI race, even as the hardware arms race intensifies. At the same time, the shift accelerates efforts within China to build self-reliant semiconductor capabilities. (The Information)
- A regional AI-compute boost: Southeast Asia — with its developed data infrastructure and relative regulatory neutrality — is emerging as a quietly important centre for global AI computing, beyond China or the US.
⚠️ Key Constraints and Trade-offs
- While training moves overseas, Chinese regulations require that AI models relying on private domestic data stay within China — limiting the offshore work mostly to generic model training on publicly available data. (The Irish Times)
- The arrangement hinges on trust and compliance: the firms lease foreign-operated data centres rather than owning them outright — a subtlety that highlights how regulatory and legal definitions shape what is permissible. (Reuters)
What It Means Going Forward
This story is likely the beginning of a broader pattern: Chinese firms balancing the use of foreign hardware for performance against building domestic chip ecosystems for long-term independence. DeepSeek’s partnership with domestic chipmakers suggests a parallel track: short-term workarounds alongside long-term strategic autonomy.
For the global AI community, it signals persistent fragmentation: the “hardware-software stack” for AI may diverge across regions, with supply-chain politics increasingly shaping who gets to build which models, and where.
Glossary
- Large Language Model (LLM): A type of AI model (e.g., GPT) trained on huge datasets of text to generate human-like language.
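  The core idea — learn next-token statistics from text, then generate — can be sketched at toy scale. The tiny corpus and the `predict_next` helper below are purely illustrative; real LLMs use neural networks trained on billions of tokens, not bigram counts.

  ```python
  from collections import Counter, defaultdict

  # Toy "language model": count word bigrams in a tiny corpus, then
  # predict the most likely next word. The principle (learn next-token
  # statistics from text) is the same one LLMs scale up.
  corpus = "the model reads text and the model writes text".split()

  bigram_counts = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      bigram_counts[prev][nxt] += 1

  def predict_next(word):
      """Return the most frequently observed follower of `word`."""
      followers = bigram_counts.get(word)
      return followers.most_common(1)[0][0] if followers else None

  print(predict_next("the"))  # "model" follows "the" twice in the corpus
  ```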
- Chip export restrictions: Government rules that prohibit or limit the sale of advanced hardware (like AI chips) to particular countries, often for national-security reasons.
- Inference vs. training: Training is when a model learns from data; inference is when the trained model is used to make predictions or respond to user prompts.
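  The training/inference split applies to models of any size. A minimal sketch, using a least-squares line fit on synthetic data (the data and the `predict` helper are illustrative assumptions, not from any real AI system):

  ```python
  import numpy as np

  # Synthetic data drawn from a known line y = 2x + 1.
  x = np.array([0.0, 1.0, 2.0, 3.0])
  y = 2.0 * x + 1.0

  # --- Training: learn parameters [w, b] from data via least squares ---
  A = np.stack([x, np.ones_like(x)], axis=1)
  w, b = np.linalg.lstsq(A, y, rcond=None)[0]

  # --- Inference: apply the learned parameters to an unseen input ---
  def predict(x_new):
      return w * x_new + b

  print(round(predict(10.0), 3))  # ~21.0, since the fit recovers w=2, b=1
  ```

  Training is the expensive, hardware-hungry phase (hence the focus on Nvidia chips); inference reuses the learned parameters and is comparatively cheap.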